Fatal accident


China cracks down on 'autonomous' car claims after fatal accident

Engadget

Chinese authorities have banned automakers from using terms such as "smart driving" and "autonomous driving" in ads in the country, according to Reuters. The Ministry of Industry and Information Technology has tightened its rules for advertising driver-assistance features following a fatal crash involving a Xiaomi SU7, which raised concerns about the technology's safety. According to Xiaomi's report, the vehicle's driving-assistance mode was engaged as the car approached a construction zone, and the driver took control just before it collided with a concrete barrier. The electric vehicle went up in flames, and the accident claimed three lives. Back in 2022, the California DMV accused Tesla of falsely portraying its vehicles as fully autonomous based on the language on its website, though that did not lead to a ban on advertising terms. Chinese authorities announced the new rule at a meeting attended by 60 representatives from the automobile industry.


An Explainable Machine Learning Approach to Traffic Accident Fatality Prediction

Rifat, Md. Asif Khan, Kabir, Ahmedul, Huq, Armana Sabiha

arXiv.org Artificial Intelligence

Road traffic accidents (RTA) pose a significant public health threat worldwide, leading to considerable loss of life and economic burdens. This is particularly acute in developing countries like Bangladesh. Building reliable models to forecast crash outcomes is crucial for implementing effective preventive measures. To aid in developing targeted safety interventions, this study presents a machine learning-based approach for classifying fatal and non-fatal road accident outcomes using data from the Dhaka metropolitan traffic crash database from 2017 to 2022. Our framework utilizes a range of machine learning classification algorithms, comprising Logistic Regression, Support Vector Machines, Naive Bayes, Random Forest, Decision Tree, Gradient Boosting, LightGBM, and Artificial Neural Network. We prioritize model interpretability by employing the SHAP (SHapley Additive exPlanations) method, which elucidates the key factors influencing accident fatality. Our results demonstrate that LightGBM outperforms other models, achieving a ROC-AUC score of 0.72. The global, local, and feature dependency analyses are conducted to acquire deeper insights into the behavior of the model. SHAP analysis reveals that casualty class, time of accident, location, vehicle type, and road type play pivotal roles in determining fatality risk. These findings offer valuable insights for policymakers and road safety practitioners in developing countries, enabling the implementation of evidence-based strategies to reduce traffic crash fatalities.


Quantifying Harm

Beckers, Sander, Chockler, Hana, Halpern, Joseph Y.

arXiv.org Artificial Intelligence

In a companion paper (Beckers et al. 2022), we defined a qualitative notion of harm: either harm is caused, or it is not. For practical applications, we often need to quantify harm; for example, we may want to choose the least harmful of a set of possible interventions. We first present a quantitative definition of harm in a deterministic context involving a single individual, then we consider the issues involved in dealing with uncertainty regarding the context and going from a notion of harm for a single individual to a notion of "societal harm", which involves aggregating the harm to individuals. We show that the "obvious" way of doing this (just taking the expected harm for an individual and then summing the expected harm over all individuals) can lead to counterintuitive or inappropriate answers, and discuss alternatives, drawing on work from the decision-theory literature.
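The aggregation problem the abstract points to can be made concrete with a toy example (mine, not the paper's formal definition of harm): two interventions with identical total expected harm but very different distributions of that harm, which a naive sum of expected harms cannot distinguish.

```python
# Toy illustration only; not the paper's formal definition of harm.
from fractions import Fraction

n = 100                    # intervention A affects 100 individuals
p = Fraction(1, 10)        # each suffers harm 1 with probability 1/10
expected_A = n * p * 1     # total expected harm under A

expected_B = Fraction(10)  # intervention B: one person suffers harm 10 for certain

# Summing expected harm over individuals ranks A and B as equally
# harmful, even though B concentrates catastrophic harm on one person.
print(expected_A == expected_B)
```

Exact rational arithmetic (`Fraction`) is used so the equality is exact rather than a floating-point coincidence.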


NHTSA Opens Investigations Into Two New Fatal Tesla Accidents

#artificialintelligence

The National Highway Traffic Safety Administration (NHTSA) is currently looking into 16 crashes involving Tesla's electric cars. The main thing the NHTSA is focusing on is Tesla's Autopilot system, an advanced driver-assist suite that offers a few high-tech, semi-autonomous features. It looks like the NHTSA's investigation is going to expand, because there have been a few more fatal accidents involving Tesla's EVs. According to Reuters, the NHTSA has opened an investigation into a recent fatal pedestrian crash in California involving a 2018 Tesla Model 3. The outlet states that an "advanced driver assistance system" was suspected to be in use when the accident occurred. The NHTSA mentioned the accident in an email update earlier this week.


Is Artificial Intelligence really 'intelligent'?

#artificialintelligence

When Artificial Intelligence was in its infancy, it was quite natural to give it a sonorous name: it needed to attract money and talent. It has since become a mainstream subject that seeks to imitate human intelligence. Consider a recent definition: "Artificial Intelligence is the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages." Speech recognition: I remember its first steps.


Prosecutors Don't Plan to Charge Uber in Self-Driving Car's Fatal Accident

#artificialintelligence

Mr. Douma said prosecutors' announcement Tuesday tracked with how it is typically people, not car manufacturers, who are held responsible for crimes committed behind the wheel. But as autonomous vehicles become more sophisticated, he said, such cases raise questions about that way of thinking. "Is this driver, or was this driver, behaving in any way different than what most drivers are going to be behaving like when the car is doing this much driving?" he said. "It's a very conventional way of thinking to say we can expect and we should expect people to sit and monitor technology that is otherwise doing all the decision-making." The Yavapai County Attorney's Office did its review at the request of the Maricopa County Attorney's Office, which had a potential conflict of interest in the case because of an earlier partnership with Uber in a safety campaign.


10 biggest robotics stories of 2018

#artificialintelligence

Rethink Robotics' Sawyer (left) and Baxter collaborative robots. 2018 was full of ups and downs, of course, but it will unfortunately be remembered more for the downs than anything else. So before we turn our attention to 2019 trends to watch, let's recap the major robotics stories of 2018. Make sure to also check out our recap of the 10 most funded robotics companies of 2018. What will you remember most from this year?


Uber warned of self-driving safety risks before autonomous car killed pedestrian, report claims

Daily Mail - Science & tech

A fatal accident involving a pedestrian and one of Uber's self-driving vehicles could have been prevented, a new report claims. An employee warned the ride-sharing giant that there were issues with Uber's autonomous-driving technology just days before Elaine Herzberg, a 49-year-old Arizona woman, was struck and killed. The email, which was sent to several high-level executives at Uber, warned that the self-driving cars had been involved in several accidents, likely due to 'poor behavior of the operator of the AV technology,' according to the Information. Robbie Miller, a manager in the testing-operations group, sent the email on March 13th. The crash, in which Herzberg was hit by a manned autonomous vehicle in Tempe, Arizona, occurred just five days later, on March 18th.


Uber resumes self-driving car tests, but only in manual mode

Engadget

Uber stopped all self-driving car tests following a fatal accident in Tempe, Arizona earlier this year, but today they're getting back on the road in limited fashion. The company says that it is taking a "first step" toward resuming autonomous car tests in Pittsburgh: its vehicles will be on the road, but only in manual mode for now. Uber says that specially trained "mission specialists" will be in the driver's seat in all cases; those drivers will be in control of their cars at all times. A second specialist will sit shotgun, recording any "notable events." While it may not be immediately obvious how having manual drivers helps the self-driving program, Uber notes that it lets the company observe, in real time, more of the scenarios its vehicles will encounter.


Uber Driver Was Streaming Hulu Show Just Before Self-Driving Car Crash

International Business Times

Police in Tempe, Arizona said evidence showed the "safety" driver behind the wheel of a self-driving Uber was distracted and streaming a television show on her phone right up until about the time of a fatal accident in March, and deemed the crash that rocked the nascent industry "entirely avoidable." A 318-page report from the Tempe Police Department, released late Thursday in response to a public records request, said the driver, Rafaela Vasquez, repeatedly looked down and not at the road, glancing up just a half second before the car hit 49-year-old Elaine Herzberg, who was crossing the street at night. According to the report, Vasquez could face charges of vehicular manslaughter. Police said that, based on testing, the crash was "deemed entirely avoidable" had Vasquez been paying attention. Police obtained records from Hulu, an online service for streaming television shows and movies, which showed Vasquez's account was playing the television talent show "The Voice" the night of the crash for about 42 minutes, ending at 9:59 p.m., which "coincides with the approximate time of the collision," the report says.